32 research outputs found

    I-HAZE: a dehazing benchmark with real hazy and haze-free indoor images

    Image dehazing has become an important computational imaging topic in recent years. However, due to the lack of ground-truth images, the comparison of dehazing methods is neither straightforward nor objective. To overcome this issue, we introduce a new dataset, named I-HAZE, that contains 35 pairs of hazy and corresponding haze-free (ground-truth) indoor images. Different from most of the existing dehazing databases, the hazy images have been generated using real haze produced by a professional haze machine. For easy color calibration and improved assessment of dehazing algorithms, each scene includes a MacBeth color checker. Moreover, since the images are captured in a controlled environment, both haze-free and hazy images are captured under the same illumination conditions. This is an important advantage of the I-HAZE dataset, as it allows us to objectively compare existing image dehazing techniques using traditional image quality metrics such as PSNR and SSIM.
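
    As a sketch of the evaluation that such paired data enables, the snippet below compares a dehazed output against its haze-free ground truth with PSNR and SSIM (here via scikit-image); the file names are placeholders, not the dataset's actual layout.

        from skimage import io
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        # hypothetical file names; I-HAZE pairs a hazy capture with a haze-free one
        dehazed = io.imread("dehazed_result.png")       # output of a dehazing method
        ground_truth = io.imread("haze_free_gt.png")    # paired haze-free capture

        psnr = peak_signal_noise_ratio(ground_truth, dehazed)
        ssim = structural_similarity(ground_truth, dehazed, channel_axis=-1)
        print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")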

    An Online Platform for Underwater Image Quality Evaluation

    With the miniaturisation of underwater cameras, the volume of available underwater images has been increasing considerably. However, underwater images are degraded by the absorption and scattering of light in water. Image processing methods exist that aim to compensate for these degradations, but there are no standard quality evaluation measures or testing datasets for a systematic empirical comparison. For this reason, we propose PUIQE, an online platform for underwater image quality evaluation, which is inspired by other computer vision areas whose progress has been accelerated by evaluation platforms. PUIQE supports the comparison of methods through standard datasets and objective evaluation measures: quality scores for images uploaded to the platform are automatically computed and published in a leaderboard, which enables the ranking of methods. We hope that PUIQE will stimulate and facilitate the development of underwater image processing algorithms to improve underwater images.
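
    As an illustration of the kind of score such a platform can compute automatically, the sketch below implements UCIQE, a common no-reference underwater quality measure; the coefficients and the saturation definition follow common re-implementations of Yang & Sowmya (2015) and are not necessarily what PUIQE uses.

        import numpy as np
        from skimage import io, color

        def uciqe(rgb):                                   # rgb: float image in [0, 1]
            lab = color.rgb2lab(rgb)
            L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
            chroma = np.hypot(a, b)
            sigma_c = chroma.std()                        # chroma standard deviation
            con_l = np.percentile(L, 99) - np.percentile(L, 1)  # luminance contrast
            mu_s = (chroma / np.maximum(np.hypot(chroma, L), 1e-6)).mean()  # mean saturation
            # assumed coefficients from the UCIQE paper
            return 0.4680 * sigma_c + 0.2745 * con_l + 0.2576 * mu_s

        print(uciqe(io.imread("underwater.png") / 255.0))  # placeholder file name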

    Blur-Robust Face Recognition via Transformation Learning

    This paper introduces a new method for recognizing faces degraded by blur using transformation learning on image features. The basic idea is to transform both the sharp images and the blurred images into the same feature subspace by the method of multidimensional scaling. Different from methods that seek blur-invariant descriptors, our method learns a transformation that both preserves the manifold structure of the original sharp images and, at the same time, enhances class separability, making it widely applicable to various descriptors. Furthermore, we combine our method with a subspace-based point spread function (PSF) estimation method to handle cases of unknown blur degree, by applying the feature transformation corresponding to the best-matched PSF, where the transformation for each PSF is learned in the training stage. Experimental results on the FERET database show that the proposed method achieves performance comparable to state-of-the-art blur-invariant face recognition methods such as LPQ and FADEIN.
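
    A simplified stand-in for this idea is sketched below: a linear map, fitted by ridge regression on synthetic features, sends blurred-image features close to their sharp counterparts so that matching happens in a shared space. The paper's actual transformation is learned via multidimensional scaling with a class-separability term, so this is only an illustrative approximation.

        import numpy as np

        rng = np.random.default_rng(0)
        X_sharp = rng.normal(size=(200, 64))              # synthetic sharp-face features
        blur_op = rng.normal(size=(64, 64)) * 0.5         # synthetic "blur" acting in feature space
        X_blur = X_sharp @ blur_op + rng.normal(size=(200, 64)) * 0.1

        # ridge solution of  min_W ||X_blur W - X_sharp||^2 + lam ||W||^2
        lam = 1e-2
        W = np.linalg.solve(X_blur.T @ X_blur + lam * np.eye(64), X_blur.T @ X_sharp)

        probe = X_blur[5] @ W                             # blurred probe mapped to the shared space
        match = np.argmin(np.linalg.norm(X_sharp - probe, axis=1))
        print("matched gallery index:", match)            # should recover index 5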

    Mt. Kelud haze removal using color attenuation prior

    Observation of the Kelud crater using closed-circuit television (CCTV) has not been adopted as a primary guide in volcanology. This is because manual observation by volcanologists is uncertain and depends on their individual skill and experience. In practice, haze in the images taken from CCTV recordings remains an obstacle. This paper presents the color attenuation prior method for eliminating haze from digital images. The results show that the selected method is capable of eliminating sparse and moderate haze, but not dense haze.
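
    For reference, the color attenuation prior models scene depth as a linear function of the HSV value and saturation channels and derives transmission from the Koschmieder model. The sketch below uses the coefficients published by Zhu et al. (2015) but omits the refinement steps (local minimum filtering, guided filtering); the input file name is a placeholder.

        import numpy as np
        from skimage import io, color

        img = io.imread("kelud_cctv_frame.png") / 255.0   # placeholder file name
        hsv = color.rgb2hsv(img)
        v, s = hsv[..., 2], hsv[..., 1]

        depth = 0.121779 + 0.959710 * v - 0.780245 * s    # linear depth prior (published coefficients)
        A = img.reshape(-1, 3)[np.argsort(depth.ravel())[-100:]].mean(axis=0)  # airlight from the 100 "deepest" pixels
        t = np.clip(np.exp(-1.0 * depth), 0.1, 1.0)[..., None]  # transmission, beta assumed 1.0

        J = (img - A) / t + A                             # invert I = J*t + A*(1-t)
        io.imsave("dehazed.png", (np.clip(J, 0, 1) * 255).astype(np.uint8))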

    Single-Scale Fusion: An Effective Approach to Merging Images

    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights into why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy yields results that are highly competitive with traditional MSF approaches.
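
    For context, a minimal version of the classical MSF baseline that SSF simplifies is sketched below: Laplacian pyramids of the inputs are blended with Gaussian pyramids of normalized weight maps and then collapsed. The single-level SSF approximation itself is not reproduced here, and the sketch assumes colour inputs with single-channel weights.

        import cv2
        import numpy as np

        def fuse(inputs, weights, levels=5):
            wsum = np.sum(weights, axis=0) + 1e-6         # normalise weights per pixel
            weights = [w / wsum for w in weights]
            fused = None
            for img, w in zip(inputs, weights):
                gi, gw = [img.astype(np.float32)], [w.astype(np.float32)]
                for _ in range(levels):                   # Gaussian pyramids
                    gi.append(cv2.pyrDown(gi[-1]))
                    gw.append(cv2.pyrDown(gw[-1]))
                # Laplacian pyramid of the input, Gaussian pyramid of its weight
                lp = [gi[i] - cv2.pyrUp(gi[i + 1], dstsize=gi[i].shape[1::-1])
                      for i in range(levels)] + [gi[-1]]
                contrib = [l * g[..., None] for l, g in zip(lp, gw)]
                fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
            out = fused[-1]                               # collapse the fused pyramid
            for lvl in reversed(fused[:-1]):
                out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
            return out

        rng = np.random.default_rng(0)
        imgs = [rng.random((64, 64, 3)).astype(np.float32) for _ in range(2)]  # toy inputs
        ws = [rng.random((64, 64)).astype(np.float32) for _ in range(2)]       # toy weight maps
        print(fuse(imgs, ws).shape)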

    Day and Night-Time Dehazing by Local Airlight Estimation

    We introduce an effective fusion-based technique to enhance both day-time and night-time hazy scenes. When inverting the Koschmieder light transmission model, and by contrast with the common implementation of the popular dark channel [1], we estimate the airlight on image patches and not on the entire image. Local airlight estimation is adopted because, under night-time conditions, the lighting generally arises from multiple localized artificial sources and is thus intrinsically non-uniform. Selecting the sizes of the patches is, however, non-trivial. Small patches are desirable to achieve fine spatial adaptation to the atmospheric light, but large patches help improve the airlight estimation accuracy by increasing the possibility of capturing pixels with airlight appearance (due to severe haze). For this reason, multiple patch sizes are considered to generate several images that are then merged together. The discrete Laplacian of the original image is provided as an additional input to the fusion process to reduce the glowing effect and to emphasize the finest image details. Similarly, for day-time scenes we apply the same principle but use a larger patch size. For each input, a set of weight maps is derived so as to assign higher weights to regions of high contrast, high saliency, and small saturation. Finally, the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. Extensive experimental results demonstrate the effectiveness of our approach compared with recent techniques, both in terms of computational efficiency and the quality of the outputs.
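
    A hedged sketch of the core step follows: airlight is estimated locally, per patch, rather than once per image, and the Koschmieder model I = J*t + A*(1-t) is then inverted. The patch size, the dark-channel-style transmission estimate, and the omission of the fusion stage are all simplifications of the full method.

        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter
        from skimage import io

        img = io.imread("night_hazy.png") / 255.0         # placeholder file name
        patch = 31                                        # one of the several patch sizes the method fuses

        # local airlight: per-channel maximum over a neighbourhood around each pixel
        A = np.stack([maximum_filter(img[..., c], size=patch) for c in range(3)], axis=-1)

        # dark-channel-style transmission, computed against the *local* airlight
        dark = minimum_filter((img / np.maximum(A, 1e-6)).min(axis=-1), size=15)
        t = np.clip(1.0 - 0.95 * dark, 0.1, 1.0)[..., None]

        J = (img - A) / t + A                             # invert the Koschmieder model
        io.imsave("dehazed_local.png", (np.clip(J, 0, 1) * 255).astype(np.uint8))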

    Night-time dehazing by fusion

    We introduce an effective technique to enhance night-time hazy scenes. Our technique builds on a multi-scale fusion approach that uses several inputs derived from the original image. Inspired by the dark-channel prior, we estimate the night-time haze by computing the airlight component on image patches rather than on the entire image. We do this because, under night-time conditions, the lighting generally arises from multiple artificial sources and is thus intrinsically non-uniform. Selecting the size of the patches is non-trivial: while small patches are desirable to achieve fine spatial adaptation to the atmospheric light, they might also induce poor light estimates and a reduced chance of capturing hazy pixels. For this reason, we deploy multiple patch sizes, each generating one input to a multi-scale fusion process. Moreover, to reduce the glowing effect and emphasize the finest details, we derive a third input. For each input, a set of weight maps is derived so as to assign higher weights to regions of high contrast, high saliency, and small saturation. Finally, the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. The experimental results demonstrate the effectiveness of our approach compared with recent techniques, both in terms of computational efficiency and the quality of the outputs.
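
    The weight maps named above could be computed as in the sketch below, under common definitions: contrast as the magnitude of the Laplacian of the luminance, a centre-surround difference as a crude saliency proxy, and the per-pixel spread of the colour channels as saturation; the paper's exact formulations may differ.

        import cv2
        import numpy as np

        def weight_map(img):                              # img: float32 RGB in [0, 1]
            lum = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
            contrast = np.abs(cv2.Laplacian(lum, cv2.CV_32F))           # high contrast
            saliency = np.abs(lum - cv2.GaussianBlur(lum, (0, 0), 5))   # centre-surround proxy
            saturation = img.std(axis=-1)                               # channel spread
            return contrast * saliency * (1.0 - saturation)             # favour small saturation

        rng = np.random.default_rng(0)
        inputs = [rng.random((240, 320, 3)).astype(np.float32) for _ in range(3)]  # stand-ins for the derived inputs
        maps = [weight_map(i) for i in inputs]
        norm = np.sum(maps, axis=0) + 1e-6                # per-pixel normalisation
        maps = [m / norm for m in maps]                   # weights now sum to one at each pixel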

    Multi-scale underwater descattering

    Underwater images suffer from severe perceptual/visual degradation due to the dense and non-uniform medium, which scatters and attenuates the propagated light that is sensed. Typical restoration methods rely on the popular dark channel prior to estimate the light attenuation factor and subtract the back-scattered light influence to invert the underwater imaging model. However, as a consequence of using approximate and global estimates of the back-scattered light, most existing single-image underwater descattering techniques perform poorly when restoring non-uniformly illuminated scenes. To mitigate this problem, we introduce a novel approach that estimates the back-scattered light locally, based on the observation of a neighborhood around the pixel of interest. To circumvent the issue of selecting the neighborhood size, we propose to fuse the images obtained over both small and large neighborhoods, each capturing distinct features from the input image. In addition, the Laplacian of the original image is provided as a third input to the fusion process, to enhance texture details in the reconstructed image. These three derived inputs are seamlessly blended via a multi-scale fusion approach, using saliency, contrast, and saturation metrics to weight each input. We perform an extensive qualitative and quantitative evaluation against several specialized techniques. In addition to its simplicity, our method outperforms the previous art on extreme underwater cases of artificial ambient illumination and high water turbidity.
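
    The sketch below illustrates the local back-scatter estimation at two neighbourhood sizes, the step this method fuses; a per-channel local minimum serves as a crude back-scatter estimate, and the two restorations are merged with a fixed blend instead of the paper's multi-scale fusion. File names are placeholders.

        import numpy as np
        from scipy.ndimage import minimum_filter
        from skimage import io

        def descatter(img, size):
            # local back-scatter estimate: per-channel minimum over the neighbourhood
            B = np.stack([minimum_filter(img[..., c], size=size) for c in range(3)], axis=-1)
            return np.clip(img - B, 0, 1)                 # subtract the back-scattered light

        img = io.imread("underwater_scene.png") / 255.0   # placeholder file name
        small, large = descatter(img, 15), descatter(img, 61)  # small and large neighbourhoods
        out = 0.5 * small + 0.5 * large                   # fixed blend standing in for the fusion stage
        io.imsave("descattered.png", (out * 255).astype(np.uint8))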

    Physically Plausible Dehazing for Non-physical Dehazing Algorithms

    Images affected by haze usually present faded colours and a loss of contrast, hindering the precision of methods devised for clear images. For this reason, image dehazing is a crucial pre-processing step for applications such as self-driving vehicles or tracking. Some of the most successful dehazing methods in the literature do not follow any physical model and are based purely on image enhancement or image fusion. In this paper, we present a procedure that allows these methods to comply with the Koschmieder physical model, i.e., that forces them to have a unique transmission for all the channels, instead of the per-channel transmission they obtain. Our method is based on coupling the results obtained for each of the three colour channels. It improves the results of the original methods both quantitatively, using image metrics, and subjectively, via a psychophysical test. It especially helps to avoid over-saturation and to reduce colour artefacts, which are the most common complications faced by image dehazing methods.
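
    A hedged sketch of the coupling idea: given the hazy image, an airlight estimate, and a non-physical dehazed result, recover the per-channel transmission implied by the Koschmieder model, replace it with a single shared transmission, and recompose the output. Taking the minimum as the shared value is an assumption; the paper's coupling rule may differ.

        import numpy as np

        def couple_transmission(I, J, A, eps=1e-6):
            t_c = (I - A) / (J - A + eps)                 # per-channel t implied by I = J*t + A*(1-t)
            t = np.clip(t_c.min(axis=-1, keepdims=True), 0.05, 1.0)  # one shared transmission (assumed: minimum)
            return (I - A) / t + A                        # recompose with the unique transmission

        rng = np.random.default_rng(0)
        I = rng.random((4, 4, 3))                         # toy hazy image
        J = rng.random((4, 4, 3))                         # toy output of a non-physical dehazer
        A = np.array([0.9, 0.9, 0.9])                     # assumed global airlight
        print(couple_transmission(I, J, A).shape)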